
    Integrating Perceptual Signal Features within a Multi-facetted Conceptual Model for Automatic Image Retrieval

    Most content-based image retrieval (CBIR) systems are restricted to the representation of signal aspects (e.g., color, texture) without explicitly considering the semantic content of images. In these approaches a sun, for example, is represented by an orange or yellow circle, but not by the term "sun". Signal-oriented solutions are fully automatic, and thus easily usable on large amounts of data, but they do not bridge the gap between the extracted low-level features and semantic descriptions. This penalizes qualitative and quantitative performance in terms of recall and precision, and therefore user satisfaction. Another class of methods, tested within the framework of the Fermi-GC project, consists in modeling the content of images through a detailed process of human-assisted indexing. This approach, based on an elaborate representation model (the conceptual graph formalism), provides satisfactory results during the retrieval phase but is not easily usable on large collections of images because of the human intervention required for indexing. The contribution of this paper is twofold: first, to improve user interaction, we connect these two classes of image retrieval systems by integrating signal and semantic features within a unified conceptual framework. Second, as opposed to state-of-the-art relevance feedback systems dealing with this integration, we propose a representation formalism supporting it, which allows us to specify a rich query language combining both semantic and signal characterizations. We validate our approach through quantitative evaluations (recall-precision curves).
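    To make the combination of semantic and signal features concrete, here is a minimal Python sketch of the idea: each index entry carries both semantic terms and dominant colors, a combined query constrains both, and retrieval quality is measured with the same recall/precision figures the paper uses for its curves. The data structures, the toy collection, and the matching rule are illustrative assumptions, not the paper's conceptual-graph formalism.

```python
from dataclasses import dataclass, field

@dataclass
class IndexedImage:
    """Hypothetical index entry pairing semantic terms with signal descriptors."""
    name: str
    concepts: set[str] = field(default_factory=set)          # e.g. {"sun", "sky"}
    dominant_colors: set[str] = field(default_factory=set)   # e.g. {"orange", "blue"}

def matches(img: IndexedImage, concept: str, color: str | None = None) -> bool:
    """Combined query: the image must contain the concept and, if a color is
    given, that color must be among its dominant colors."""
    return concept in img.concepts and (color is None or color in img.dominant_colors)

def precision_recall(retrieved: set[str], relevant: set[str]) -> tuple[float, float]:
    """Standard precision/recall figures used for the evaluation curves."""
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

collection = [
    IndexedImage("beach.jpg",  {"sun", "sea"},  {"orange", "blue"}),
    IndexedImage("sunset.jpg", {"sun", "sky"},  {"red", "purple"}),
    IndexedImage("forest.jpg", {"tree"},        {"green"}),
]

# Query: images showing a "sun" whose dominant colors include "orange".
retrieved = {img.name for img in collection if matches(img, "sun", "orange")}

# The color constraint misses sunset.jpg, so recall drops to 0.5 while precision stays 1.0.
print(precision_recall(retrieved, relevant={"beach.jpg", "sunset.jpg"}))
```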

    The Outline of an 'Intelligent' Image Retrieval Engine

    The first image retrieval systems have the advantage of being fully automatic, and thus scalable to large collections of images, but are restricted to the representation of low-level aspects (e.g., color, texture) without considering the semantic content of images. This compromises interaction, making it difficult for a user to query with precision. The growing need for 'intelligent' systems, i.e., systems capable of bridging this semantic gap, leads to new architectures combining multiple characterizations of the image content. This paper presents SIR, a promising high-level framework featuring semantic, signal color, and spatial characterizations. It offers a fully textual query module based on a language manipulating both boolean and quantification operators, making it possible for a user to request elaborate image scenes such as a "covered (mostly grey) sky" or "people in front of a building".
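    As an illustration of what evaluating a textual query that mixes boolean and quantification operators could look like, the sketch below checks a request in the spirit of "covered (mostly grey) sky" and "building" against per-region annotations. The region model, the 0.5 threshold for "mostly", and the helper functions are assumptions made for the example, not SIR's actual query language or data model.

```python
from dataclasses import dataclass

@dataclass
class Region:
    label: str    # semantic label of the segmented region, e.g. "sky"
    color: str    # dominant color of the region
    area: float   # fraction of the image covered by the region

def contains(regions: list[Region], label: str) -> bool:
    """Boolean atom: the image contains at least one region with this label."""
    return any(r.label == label for r in regions)

def mostly(regions: list[Region], label: str, color: str, threshold: float = 0.5) -> bool:
    """Quantification atom: among regions labelled `label`, the given color
    accounts for more than `threshold` of their total area."""
    matching = [r for r in regions if r.label == label]
    total = sum(r.area for r in matching)
    colored = sum(r.area for r in matching if r.color == color)
    return total > 0 and colored / total > threshold

# Toy annotation of one image, then the query
# "covered (mostly grey) sky" AND "building".
image = [
    Region("sky", "grey", 0.45),
    Region("sky", "blue", 0.05),
    Region("building", "brown", 0.30),
    Region("people", "varied", 0.20),
]
query_holds = mostly(image, "sky", "grey") and contains(image, "building")
print(query_holds)  # True: grey covers 90% of the sky area and a building is present
```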